
[Figure panel: recurrent connection; Algorithm 1: Learning the external stimulus s]

Neural Information Processing Systems

Figure taken and adapted from [38]; different works consider different properties. Compared to backpropagation (BP), predictive coding (PC) allows for more flexibility in how the model is defined, trained, and evaluated. The experiments reported in this paper show the best results achieved on each specific task and therefore reflect only the effects of a specific set of hyperparameters. Feedforward networks (left) simply overfit the input samples, i.e., they reproduce them without any modification, even when the inputs are unrelated to the training data.
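To make the BP/PC contrast above concrete, here is a minimal predictive-coding sketch in NumPy: inference relaxes the hidden activity to minimize layer-wise prediction errors, and the weight updates are local (error times presynaptic activity) rather than backpropagated. The layer sizes, learning rates, and linear prediction model are illustrative assumptions, not the setup of this paper or of [38].

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 8, 2
W1 = rng.normal(scale=0.1, size=(n_in, n_hid))   # predicts the input from the hidden layer
W2 = rng.normal(scale=0.1, size=(n_hid, n_out))  # predicts the hidden layer from the output

def pc_step(W1, W2, x_in, x_out, n_relax=50, lr_x=0.1, lr_w=0.01):
    """Clamp input and target, relax the hidden layer on the prediction-error
    energy, then apply local (Hebbian-like) weight updates."""
    x_hid = np.zeros(W1.shape[1])
    for _ in range(n_relax):
        e_in = x_in - W1 @ x_hid               # bottom-layer prediction error
        e_hid = x_hid - W2 @ x_out             # hidden-layer prediction error
        x_hid += lr_x * (W1.T @ e_in - e_hid)  # gradient descent on the energy
    e_in = x_in - W1 @ x_hid                   # errors at the relaxed state
    e_hid = x_hid - W2 @ x_out
    W1 += lr_w * np.outer(e_in, x_hid)         # local update: error x presynaptic activity
    W2 += lr_w * np.outer(e_hid, x_out)
    return float(e_in @ e_in + e_hid @ e_hid)

x, y = rng.normal(size=n_in), rng.normal(size=n_out)
for _ in range(200):
    energy = pc_step(W1, W2, x, y)
print(f"final prediction-error energy: {energy:.4f}")
```

Note how both inference and learning only touch quantities available at each layer, which is the source of the flexibility contrasted with BP above.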


Time Makes Space: Emergence of Place Fields in Networks Encoding Temporally Continuous Sensory Experiences

Neural Information Processing Systems

Here we show that place cells emerge in networks trained to remember temporally continuous sensory episodes. We model CA3 as a recurrent autoencoder that recalls and reconstructs sensory experiences from noisy and partially occluded observations by agents traversing simulated arenas.
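As a concrete, heavily simplified illustration of this setup, the sketch below trains a small recurrent autoencoder in PyTorch to reconstruct a temporally continuous sensory stream from noisy, partially occluded observations. The sensory model (a smooth random projection of a drifting 2-D position), the architecture, and all sizes are assumptions for illustration, not the paper's actual data or network.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
n_obs, n_hid, T = 16, 64, 100

# Smooth sensory stream: observations are a fixed random projection of a
# slowly drifting 2-D "position" (a stand-in for an agent crossing an arena).
pos = torch.cumsum(0.05 * torch.randn(T, 2), dim=0)
readout = torch.randn(2, n_obs)
clean = torch.sin(pos @ readout)                      # (T, n_obs)

class RecurrentAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.RNN(n_obs, n_hid, batch_first=True)
        self.dec = nn.Linear(n_hid, n_obs)
    def forward(self, x):
        h, _ = self.rnn(x)        # recurrent encoding of the stream
        return self.dec(h), h     # reconstruction + hidden states

model = RecurrentAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(500):
    noisy = clean + 0.3 * torch.randn_like(clean)
    mask = (torch.rand_like(clean) > 0.3).float()     # partial occlusion
    recon, h = model((noisy * mask).unsqueeze(0))
    loss = ((recon.squeeze(0) - clean) ** 2).mean()   # recall the clean stream
    opt.zero_grad()
    loss.backward()
    opt.step()
print(f"reconstruction MSE: {loss.item():.4f}")
```

After training, the hidden states h can be binned by the underlying position to look for the place-like tuning the abstract describes.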


Export Reviews, Discussions, Author Feedback and Meta-Reviews

Neural Information Processing Systems

First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. This submission describes a novel autoencoder method that uses unsupervised learning to configure a recurrent network to encode both the current and past states of an input. I am neither a mathematician nor a machine learning expert, and thus am not qualified to review the work for technical merit. However, I have extensive experience in neural network modeling, and thus appreciate both the objective and the purported accomplishments: the ability to train a recurrent network to store input sequences in an efficient manner using unsupervised learning. The authors describe a mechanism that addresses the problem by breaking it into two stages -- autoencoding, and then optimization -- that are carried out over different time scales.
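The two-stage mechanism summarized in this review lends itself to a small sketch. Below is a generic two-timescale toy in NumPy: a fast stage adapts a linear decoder D so the current stimulus is decodable from the network state (autoencoding), while a slower stage occasionally adjusts the recurrent weights so that information decodable from the previous state is preserved. The surrogate memory objective, sizes, and learning rates are illustrative assumptions, not the submission's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_net = 3, 20
W_rec = 0.9 * np.linalg.qr(rng.normal(size=(n_net, n_net)))[0]  # stable start
W_in = rng.normal(scale=0.5, size=(n_net, n_in))
D = np.zeros((n_in, n_net))          # decoder, adapted on the fast timescale

r = np.zeros(n_net)
lr_fast, lr_slow = 0.05, 1e-3
for t in range(2000):
    s = rng.normal(size=n_in)        # external input at time t
    r_prev = r
    r = np.tanh(W_rec @ r + W_in @ s)
    # Fast stage (every step): autoencode the current input from the state.
    err = s - D @ r
    D = D + lr_fast * np.outer(err, r)
    # Slow stage (every 20th step): gradient step pushing the new state to
    # preserve whatever was decodable from the previous state.
    if t % 20 == 0 and t > 0:
        mem_err = D @ r_prev - D @ r
        g = (D.T @ mem_err) * (1.0 - r ** 2)   # backprop through the tanh
        W_rec = W_rec + lr_slow * np.outer(g, r_prev)
print(f"final autoencoding error: {np.linalg.norm(err):.3f}")
```

The point of the separation is that the fast decoder tracks the current state while the slow recurrent updates see a quasi-stationary decoding problem, mirroring the two time scales the review highlights.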


Export Reviews, Discussions, Author Feedback and Meta-Reviews

Neural Information Processing Systems

First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. Summary: This paper deals with sampling methods based on linear rate-based neural networks. First, it shows that symmetric weights (a common constraint in many models) significantly hurt the mixing rate. Then it shows that a (more physiological) non-normal network can have a much faster mixing rate if its connectivity is optimized for this purpose. This holds even when additional biological constraints (Dale's law) are imposed.
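The review's core claim can be checked numerically with a toy linear sampler. For the stochastic dynamics dx = -B x dt + sqrt(2) dW, any B = (I + S) Sigma^-1 with skew-symmetric S has stationary distribution N(0, Sigma), but the reversible choice S = 0 (symmetric B) mixes at a rate limited by the slowest eigenvalue of Sigma^-1, while a non-normal B (S != 0) can relax faster. The random S below is an illustrative stand-in; the paper optimizes the connectivity rather than drawing it at random.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(0)
n = 10
# Target covariance with one slow direction.
Q = np.linalg.qr(rng.normal(size=(n, n)))[0]
Sigma = Q @ np.diag(np.linspace(0.1, 5.0, n)) @ Q.T

A = rng.normal(size=(n, n))
S = 3.0 * (A - A.T)                              # skew-symmetric perturbation
B_sym = np.linalg.inv(Sigma)                     # reversible (symmetric) sampler
B_non = (np.eye(n) + S) @ np.linalg.inv(Sigma)   # non-normal sampler

for name, B in [("symmetric", B_sym), ("non-normal", B_non)]:
    # Stationary covariance solves B X + X B^T = 2 I; both should recover Sigma.
    Sig_hat = solve_continuous_lyapunov(B, 2 * np.eye(n))
    rate = np.min(np.linalg.eigvals(B).real)     # slowest relaxation rate
    err = np.max(np.abs(Sig_hat - Sigma))
    print(f"{name}: slowest rate = {rate:.3f}, stationary-cov error = {err:.2e}")
```

Both samplers target the same distribution (the Lyapunov check), but the skew-symmetric term typically raises the smallest real part of the spectrum of B, which is the faster mixing the review describes.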